Practical Lattice Cryptosystems: NTRUEncrypt and NTRUMLS
Public key cryptography, as deployed on the internet today, stands on shaky
ground. For over twenty years now it has been known that the systems in
widespread use are insecure against adversaries equipped with quantum computers
-- a fact that has largely been discounted due to the enormous challenge of
building such devices. However, research into the development of quantum
computers is accelerating and is producing an abundance of positive results
that indicate quantum computers could be built in the near future. As a
result, individuals, corporations and government entities are calling for the deployment of
new cryptography to replace systems that are vulnerable to quantum
cryptanalysis. Few satisfying schemes are to be found.
This work examines the design, parameter selection, and cryptanalysis of a
post-quantum public key encryption scheme, NTRUEncrypt, and a related
signature scheme, NTRUMLS. It is hoped that this analysis will prove useful in
comparing these schemes against other candidates that have been proposed to
replace existing infrastructure.
An upper bound on the decryption failure rate of static-key NewHope
We give a new proof that the decryption failure rate of NewHope512 is at most . As in previous work, this failure rate is with respect to random, honestly generated, secret key and ciphertext pairs. However, our technique can also be applied to a fixed secret key. We demonstrate our technique on some subsets of the NewHope1024 key space, and we identify a large subset of NewHope1024 keys with failure rates of no more than .
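The general shape of such a failure-rate calculation can be illustrated numerically (this is only a toy sketch with made-up parameters, not the paper's technique or NewHope's distributions): model the accumulated decryption noise as a sum of independent small noise terms, compute its exact distribution by convolution, and add up the tail mass beyond the decoding threshold q/4.

```python
from collections import defaultdict

def convolve(a, b):
    """Distribution of X + Y for independent X ~ a, Y ~ b (dicts: value -> probability)."""
    out = defaultdict(float)
    for x, px in a.items():
        for y, py in b.items():
            out[x + y] += px * py
    return dict(out)

# Toy parameters (NOT NewHope's): a small modulus and a centered binomial noise term.
q = 257
single = {-2: 1/16, -1: 4/16, 0: 6/16, 1: 4/16, 2: 1/16}

# Simplification: treat the accumulated noise as a sum of n independent terms.
n = 256
acc = {0: 1.0}
for _ in range(n):
    acc = convolve(acc, single)

# In this toy model a coefficient decrypts incorrectly when |noise| >= q/4.
failure = sum(p for v, p in acc.items() if abs(v) >= q / 4)
print(f"toy per-coefficient failure probability: {failure:.3e}")
```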
Improving post-quantum cryptography through cryptanalysis
Large quantum computers pose a threat to our public-key cryptographic infrastructure. The possible responses are:
Do nothing; accept the fact that quantum computers might be used to break widely deployed protocols.
Mitigate the threat by switching entirely to symmetric-key protocols.
Mitigate the threat by switching to different public-key protocols.
Each user of public-key cryptography will make one of these choices, and we should not expect consensus. Some users will do nothing---perhaps because they view the threat as being too remote. And some users will find that they never needed public-key cryptography in the first place.
The work that I present here is for people who need public-key cryptography and want to switch to new protocols. Each of the three articles raises the security estimate of a cryptosystem by showing that some attack is less effective than was previously believed. Each article thereby reduces the cost of using a protocol by letting the user choose smaller (or more efficient) parameters at a fixed level of security.
In Part 1, I present joint work with Samuel Jaques in which we revise security estimates for the Supersingular Isogeny Key Exchange (SIKE) protocol. We show that known quantum claw-finding algorithms do not outperform classical claw-finding algorithms. This allows us to recommend 434-bit primes for use in SIKE at the same security level at which 503-bit primes had previously been recommended.
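As a rough sense of scale (assuming, as in the standard analysis of SIKE, that the relevant classical attack is a meet-in-the-middle claw-finding search over sets of size about p^(1/4); these are ballpark figures, not the memory-aware van Oorschot--Wiener estimates used in the work itself):

```python
def naive_claw_bits(prime_bits):
    # Meet-in-the-middle claw finding over two sets of size about p^(1/4)
    # costs on the order of p^(1/4) operations (memory costs ignored).
    return prime_bits / 4

for bits in (434, 503):
    print(f"{bits}-bit prime: roughly 2^{naive_claw_bits(bits):.0f} classical operations")
```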
In Part 2, I present joint work with Martin Albrecht, Vlad Gheorghiu, and Eamonn Postlethwaite that examines the impact of quantum search on sieving algorithms for the shortest vector problem. Cryptographers commonly assume that the cost of solving the shortest vector problem in dimension is quantumly and classically. These are upper bounds based on a near neighbor search algorithm due to Becker--Ducas--Gama--Laarhoven. Naively, one might think that must be at least to avoid attacks that cost fewer than operations. Our analysis accounts for terms in the that were previously ignored. In a realistic model of quantum computation, we find that applying the Becker--Ducas--Gama--Laarhoven algorithm in dimension will cost more than operations. We also find reason to believe that the classical algorithm will outperform the quantum algorithm in dimensions .
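The naive calculation alluded to above can be made concrete using the widely cited core-SVP exponents for the Becker--Ducas--Gama--Laarhoven sieve, roughly 2^(0.292d) classically and 2^(0.265d) quantumly. These asymptotic exponents and the 128-bit target below are standard reference points rather than figures drawn from this abstract, and the lower-order terms they discard are exactly what the analysis revisits.

```python
import math

# Widely cited asymptotic core-SVP exponents for the BDGL sieve.
CLASSICAL_EXPONENT = 0.292
QUANTUM_EXPONENT = 0.265

target = 128  # desired bits of security
print("naive minimum SVP dimension for", target, "bits of security:")
print("  classical model:", math.ceil(target / CLASSICAL_EXPONENT))  # 439
print("  quantum model:  ", math.ceil(target / QUANTUM_EXPONENT))    # 484
```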
In Part 3, I present solo work on a variant of post-quantum RSA. The original pqRSA proposal by Bernstein--Heninger--Lou--Valenta uses terabyte keys of the form where each is a -bit prime. My variant uses terabyte keys of the form where each is a -bit prime and is the -th prime. Prime generation is the most expensive part of post-quantum RSA in practice, so the smaller number of prime factors in my proposal gives a large speedup in key generation. The repeated factors help an attacker identify an element of small order, and thereby allow the attacker to use a small-order variant of Shor's algorithm. I analyze small-order attacks and discuss the cost of the classical pre-computation that they require.
A Comparison of NTRU Variants
We analyze the size vs. security trade-offs that are available when selecting parameters for perfectly correct key encapsulation mechanisms based on NTRU.
Multi-power Post-quantum RSA
Special purpose factoring algorithms have discouraged the adoption of multi-power RSA, even in a post-quantum setting. We revisit the known attacks and find that a general recommendation against repeated factors is unwarranted. We find that one-terabyte RSA keys of the form are competitive with one-terabyte RSA keys of the form . Prime generation can be made a factor of 100000 faster at a loss of at least but not more than bits of security against known attacks. The range depends on the relative cost of bit and qubit operations, under the assumption that qubit operations cost bit operations for some constant.
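The source of the key-generation saving can be sketched with a toy example (tiny primes and an arbitrary exponent, not the key forms or parameters analysed in the paper): a modulus built from repeated factors reaches the same size with far fewer fresh primes, and prime generation dominates key-generation cost.

```python
import math
from sympy import randprime  # any prime-generation routine would do here

def modulus_from_distinct_primes(count, bits):
    # Requires `count` fresh prime generations.
    return math.prod(randprime(2**(bits - 1), 2**bits) for _ in range(count))

def modulus_from_repeated_primes(count, power, bits):
    # Requires only `count` fresh prime generations; each prime appears `power` times.
    return math.prod(randprime(2**(bits - 1), 2**bits) for _ in range(count)) ** power

# Both toy moduli are roughly 1024 bits, but the second needs 4x fewer prime generations.
print(modulus_from_distinct_primes(32, 32).bit_length())
print(modulus_from_repeated_primes(8, 4, 32).bit_length())
```

The security cost of the repeated factors (the small-order attacks mentioned above) is not captured by this sketch.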
Estimating the cost of generic quantum pre-image attacks on SHA-2 and SHA-3
We investigate the cost of Grover's quantum search algorithm when used in the
context of pre-image attacks on the SHA-2 and SHA-3 families of hash functions.
Our cost model assumes that the attack is run on a surface code based
fault-tolerant quantum computer. Our estimates rely on a time-area metric that
costs the number of logical qubits times the depth of the circuit in units of
surface code cycles. As a surface code cycle involves a significant classical
processing stage, our cost estimates allow for crude, but direct, comparisons
of classical and quantum algorithms.
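In symbols, the metric used is

\[ \mathrm{cost} \;=\; (\text{number of logical qubits}) \times (\text{circuit depth in surface code cycles}), \]

so the logical-qubit-cycle figures quoted below are the product of the two quantities reported for each circuit.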
We exhibit a circuit for a pre-image attack on SHA-256 that is approximately
surface code cycles deep and requires approximately
logical qubits. This yields an overall cost of
logical-qubit-cycles. Likewise we exhibit a SHA3-256 circuit that is
approximately surface code cycles deep and requires approximately
logical qubits for a total cost of, again,
logical-qubit-cycles. Both attacks require on the order of queries in
a quantum black-box model, hence our results suggest that executing these
attacks may be as much as billion times more expensive than one would
expect from the simple query analysis.
Estimating quantum speedups for lattice sieves
Quantum variants of lattice sieve algorithms are routinely used to assess the security of lattice based cryptographic constructions. In this work we provide a heuristic, non-asymptotic analysis of the cost of several algorithms for near neighbour search on high dimensional spheres. These algorithms are key components of lattice sieves. We design quantum circuits for near neighbour search algorithms and provide software that numerically optimises algorithm parameters according to various cost metrics. Using this software we estimate the cost of classical and quantum near neighbour search on spheres. For the most performant near neighbour search algorithm that we analyse, we find a small quantum speedup in dimensions of cryptanalytic interest. Achieving this speedup requires several optimistic physical and algorithmic assumptions.
Decryption failure is more likely after success
The user of an imperfectly correct lattice-based public-key encryption scheme leaks information about their secret key with each decryption query that they answer---even if they answer all queries successfully. Through a refinement of the D'Anvers--Guo--Johansson--Nilsson--Vercauteren--Verbauwhede failure boosting attack, we show that an adversary can use this information to improve their odds of finding a decryption failure. We also propose a new definition of -correctness, and we re-assess the correctness of several submissions to NIST's post-quantum standardization effort.
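For context, the baseline notion being refined, in the standard Hofheinz--Hövelmanns--Kiltz formulation: a scheme is δ-correct if

\[ \mathbb{E}_{(pk,\,sk)\leftarrow \mathrm{KeyGen}}\Big[\, \max_{m}\ \Pr\big[\mathrm{Dec}(sk,\mathrm{Enc}(pk,m)) \ne m\big] \Big] \;\le\; \delta, \]

where the probability is taken over the encryption randomness.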
NTRU Modular Lattice Signature Scheme on CUDA GPUs
In this work we show how to use Graphics Processing Units (GPUs) with Compute Unified Device Architecture (CUDA) to accelerate a lattice based signature scheme, namely, the NTRU modular lattice signature (NTRU-MLS) scheme. Lattice based schemes require operations on large vectors that are perfect candidates for GPU implementations. In addition, similar to most lattice based signature schemes, NTRU-MLS provides transcript security with a rejection sampling technique. With a GPU implementation, we are able to generate many candidates simultaneously, and hence mitigate the performance slowdown from rejection sampling. Our implementation results show that for the original NTRU-MLS parameter sets, we obtain a 2x improvement in signing speed; for the revised parameter sets, where the acceptance rate of rejection sampling is down to around 1%, our implementation can be as much as 47x faster than a CPU implementation.
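The effect of batching on rejection sampling can be sketched independently of the GPU details: with acceptance probability p, a sequential signer expects 1/p attempts per signature, while a batch of N candidates fails entirely only with probability (1-p)^N. The acceptance test below is a placeholder coin flip, not the NTRU-MLS condition.

```python
import random

ACCEPT_PROB = 0.01  # roughly the revised-parameter acceptance rate quoted above

def candidate_accepted():
    # Placeholder for computing one candidate signature and its rejection test.
    return random.random() < ACCEPT_PROB

def attempts_sequential():
    attempts = 1
    while not candidate_accepted():
        attempts += 1
    return attempts  # expected ~100 attempts

def passes_batched(batch_size=256):
    # On a GPU, all batch_size candidates are produced in one parallel pass.
    passes = 1
    while not any(candidate_accepted() for _ in range(batch_size)):
        passes += 1
    return passes  # expected ~1.08 passes for batch_size=256

trials = 1000
print(sum(attempts_sequential() for _ in range(trials)) / trials)
print(sum(passes_batched() for _ in range(trials)) / trials)
```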
Transcript secure signatures based on modular lattices
We introduce a class of lattice-based digital signature schemes
based on modular properties of the coordinates of lattice vectors. We also
suggest a method of making such schemes transcript secure via a rejection
sampling technique of Lyubashevsky (2009). A particular instantiation
of this approach is given, using NTRU lattices. Although the scheme is
not supported by a formal security reduction, we present arguments for
its security and derive concrete parameters (first version) based on the
performance of state-of-the-art lattice reduction and enumeration techniques.
In the revision, we re-evaluate the security of the first version of the
parameter sets under the hybrid approach combining the lattice reduction
attack and the meet-in-the-middle attack. We present new sets of parameters
that are robust against this attack, as well as against all previously known attacks.
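The rejection-sampling idea referenced above can be sketched in one dimension (Lyubashevsky-style, with toy integer bounds chosen purely for illustration, not NTRU-MLS parameters): a candidate z = y + s*c is released only if it lands in a range that any valid secret could have produced, so accepted signatures follow a distribution independent of the secret.

```python
import random

# Toy bounds: |s*c| <= 5, masking value y uniform on [-1000, 1000].
S_TIMES_C_MAX = 5
Y_MAX = 1000

def sign_coordinate(s, c):
    """One coordinate of a signature, retried until it falls in the safe range."""
    while True:
        y = random.randint(-Y_MAX, Y_MAX)       # fresh masking value
        z = y + s * c
        if abs(z) <= Y_MAX - S_TIMES_C_MAX:     # accept only secret-independent outputs
            return z

# Accepted z values are uniform on the same interval for every valid secret,
# so a transcript of many signatures leaks nothing about s.
print([sign_coordinate(3, 1) for _ in range(5)])
```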